
    On the cost of delayed currency fixing announcements

    In foreign exchange markets, vanilla and barrier options are traded frequently. The market standard is a cutoff time of 10:00 a.m. in New York for the strike of vanillas, and a knock-out event based on a continuously observed barrier in the interbank market. However, many clients, particularly from Italy, prefer the cutoff and the knock-out event to be based on the fixing published by the European Central Bank on Reuters page ECB37. Such barrier options are called discretely monitored barrier options. While these options can be priced in several models by various techniques, the ECB fixing as the reference source causes two problems: first, it is not tradable, and second, it is published with a delay of about 10-20 minutes. We examine the effect of these problems on the hedging of such options and, consequently, suggest a cost based on the additional uncertainty encountered. Keywords: exotic options, currency fixings
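
    To illustrate the pricing difference between the two conventions, the sketch below compares a continuously monitored with a discretely (once-a-day) monitored down-and-out call under geometric Brownian motion via Monte Carlo. It is a minimal illustration, not the paper's model; all market parameters (spot, strike, barrier, volatility, rate) are hypothetical.

        # Hypothetical illustration (not from the paper): Monte Carlo prices of a
        # down-and-out call under geometric Brownian motion, comparing a barrier
        # monitored on a fine intraday grid (a proxy for continuous monitoring)
        # with a barrier monitored only once a day at a fixing time.
        import numpy as np

        rng = np.random.default_rng(0)
        S0, K, B = 1.10, 1.10, 1.05      # spot, strike, down-and-out barrier (made up)
        r, sigma, T = 0.01, 0.10, 0.5    # rate, volatility, maturity in years (made up)
        n_paths, steps_per_day, days = 10_000, 8, 126

        dt = T / (days * steps_per_day)
        z = rng.standard_normal((n_paths, days * steps_per_day))
        log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        paths = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

        payoff = np.maximum(paths[:, -1] - K, 0.0)

        # "Continuous" monitoring: knocked out if any grid point breaches the barrier.
        alive_cont = paths.min(axis=1) > B
        # Discrete monitoring: only the once-a-day fixing values matter.
        fixings = paths[:, ::steps_per_day]
        alive_disc = fixings.min(axis=1) > B

        disc = np.exp(-r * T)
        print("continuously monitored:", disc * np.mean(payoff * alive_cont))
        print("discretely monitored:  ", disc * np.mean(payoff * alive_disc))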

    Low Diameter Graph Decompositions by Approximate Distance Computation

    In many models for large-scale computation, decomposition of the problem is key to efficient algorithms. For distance-related graph problems, it is often crucial that such a decomposition results in clusters of small diameter, while the probability that an edge is cut by the decomposition scales linearly with the length of the edge. There is a large body of literature on low-diameter graph decompositions with small edge-cutting probabilities, with all existing techniques heavily building on single-source shortest paths (SSSP) computations. Unfortunately, in many theoretical models for large-scale computation, the SSSP task constitutes a complexity bottleneck. Therefore, it is desirable to replace exact SSSP computations with approximate ones. However, this imposes a fundamental challenge, since the existing constructions of low-diameter graph decompositions with small edge-cutting probabilities inherently rely on the subtractive form of the triangle inequality, which fails to hold under distance approximation. The current paper overcomes this obstacle by developing a technique termed blurry ball growing. By combining this technique with a clever algorithmic idea of Miller et al. (SPAA 2013), we obtain a construction of low-diameter decompositions with small edge-cutting probabilities that replaces exact SSSP computations by (a small number of) approximate ones. The utility of our approach is showcased by deriving efficient algorithms that work in the CONGEST, PRAM, and semi-streaming models of computation. As an application, we obtain metric tree embedding algorithms in the vein of Bartal (FOCS 1996) whose computational complexities in these models are optimal up to polylogarithmic factors. Our embeddings have the additional useful property that the tree can be mapped back to the original graph such that each edge is "used" only logarithmically many times, which is of interest for capacitated problems and for simulating CONGEST algorithms on the tree into which the graph is embedded.
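
    The construction builds on the exponential-shift clustering of Miller, Peng, and Xu (SPAA 2013): every vertex draws a random shift from an exponential distribution, and each vertex joins the cluster of the centre whose shift minus distance is largest. The sketch below is a minimal illustration of that baseline idea using exact Dijkstra distances on a toy graph; the paper's contribution, blurry ball growing, is what makes a construction of this flavour work when only approximate distances are available.

        # Minimal sketch of exponential-shift clustering (Miller, Peng, Xu, SPAA 2013):
        # every vertex u draws delta_u ~ Exp(beta); vertex v joins the cluster of the
        # centre u maximising delta_u - dist(u, v). Exact Dijkstra distances are used
        # here for simplicity; the paper replaces them with approximate SSSP.
        import heapq
        import random

        def dijkstra(adj, src):
            dist = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in adj[u]:
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return dist

        def exp_shift_decomposition(adj, beta, seed=0):
            random.seed(seed)
            nodes = list(adj)
            shift = {u: random.expovariate(beta) for u in nodes}
            dist = {u: dijkstra(adj, u) for u in nodes}  # one SSSP per node; fine for a toy graph
            return {v: max(nodes, key=lambda u: shift[u] - dist[u].get(v, float("inf")))
                    for v in nodes}

        # Toy weighted graph: a path 0-1-2-3-4 with unit edge weights.
        adj = {v: [] for v in range(5)}
        for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
            adj[u].append((v, 1.0))
            adj[v].append((u, 1.0))
        print(exp_shift_decomposition(adj, beta=0.5))  # maps each vertex to its cluster centre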

    Challenges in preservation (planning)

    This short paper attempts to highlight some challenges to be tackled by digital preservation (DP) research in the coming years, taking the perspective of preservation planning as a starting point. In short, these challenges are: (1) scalability (up and down), requiring (2) measurement of relevant decision factors, in turn requiring (3) benchmarking and ground truth; (4) quality-aware emulation; (5) a move from the current closed-systems approach to open structures that accommodate evolving knowledge; and (6) a move from post-obsolescence actions to 'longevity engineering'.

    Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models

    We present a method for solving the transshipment problem - also known as uncapacitated minimum cost flow - up to a multiplicative error of $1 + \varepsilon$ in undirected graphs with non-negative edge weights, using a tailored gradient descent algorithm. Using $\tilde{O}(\cdot)$ to hide polylogarithmic factors in $n$ (the number of nodes in the graph), our gradient descent algorithm takes $\tilde{O}(\varepsilon^{-2})$ iterations, and in each iteration it solves an instance of the transshipment problem up to a multiplicative error of $\operatorname{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. Using a randomized rounding scheme, we can further extend the method to finding approximate solutions for the single-source shortest paths (SSSP) problem. As a consequence, we improve upon prior work by obtaining the following results: (1) Broadcast CONGEST model: $(1 + \varepsilon)$-approximate SSSP using $\tilde{O}((\sqrt{n} + D)\varepsilon^{-3})$ rounds, where $D$ is the (hop) diameter of the network. (2) Broadcast congested clique model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(\varepsilon^{-2})$ rounds. (3) Multipass streaming model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(n)$ space and $\tilde{O}(\varepsilon^{-2})$ passes. The previously fastest SSSP algorithms for these models leverage sparse hop sets; we bypass the hop set construction, as computing a spanner is sufficient with our method. The above bounds assume non-negative edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights.
    Comment: Accepted to SIAM Journal on Computing. Preliminary version in DISC 2017. Abstract shortened to fit arXiv's limitation to 1920 characters.
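
    A key point in the abstract is that each gradient-descent iteration only needs a polylogarithmic-factor approximation, so it can be solved on a sparse spanner of logarithmic stretch. As a point of reference (not the paper's distributed construction), the sketch below shows the classic greedy $(2k-1)$-spanner: with $k$ on the order of $\log n$ it gives stretch $O(\log n)$ with $O(n)$ edges.

        # Illustrative sketch (not the paper's algorithm): the classic greedy
        # (2k-1)-spanner. An edge is kept only if the spanner built so far does
        # not already provide a path within the allowed stretch.
        import heapq

        def dijkstra_dist(adj, src, dst, cutoff):
            # Shortest src-dst distance in the current spanner, pruned at `cutoff`.
            dist = {src: 0.0}
            pq = [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == dst:
                    return d
                if d > dist.get(u, float("inf")) or d > cutoff:
                    continue
                for v, w in adj.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        heapq.heappush(pq, (nd, v))
            return float("inf")

        def greedy_spanner(n, edges, k):
            stretch = 2 * k - 1
            adj = {v: [] for v in range(n)}
            spanner = []
            for w, u, v in sorted((w, u, v) for u, v, w in edges):
                if dijkstra_dist(adj, u, v, stretch * w) > stretch * w:
                    adj[u].append((v, w))
                    adj[v].append((u, w))
                    spanner.append((u, v, w))
            return spanner

        # Toy example: a unit-weight 5-cycle plus a heavy chord (the chord is dropped).
        edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0), (4, 0, 1.0), (0, 2, 5.0)]
        print(greedy_spanner(5, edges, k=2))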

    Staying with the Hubble Trouble

    This thesis investigates the tension between measurements of the expansion rate of the Universe, also known as the Hubble-Lemaître constant, H0, by a number of independent early- and late-time observables. In the first part, we consider an alternative to the standard theory of gravity, the generalised Proca (GP) theory, which can potentially alleviate the Hubble tension. Focusing on the GP Lagrangian at cubic order, the Cubic Vector Galileon (CVG) model, we derive the simplified equations for gravity and vector modes, implement them in a modified version of the ECOSMOG N-body code, and further augment the code with ray-tracing modules taken from Ray-RAMSES. Accordingly, we conduct the first broad simulation study of cosmologies based on the CVG theory, exploring the formation, evolution, and clustering of dark matter through matter, halo, weak-lensing, and void statistics. In the second part, we attempt to answer whether systematic errors in strong gravitational lensing time-delay measurements could partly explain the Hubble tension. We quantify the impact of line-of-sight structures on time-delay measurements and, in turn, on the inferred value of H0, and we test the reliability of existing procedures for correcting for these line-of-sight effects. To that end, we build realistic lightcones using multiple-lens-plane ray tracing to generate a set of simulated strong-lensing systems derived from the CosmoDC2 semi-analytic extragalactic catalogue.
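
    For context (a standard relation in time-delay cosmography, not spelled out in the abstract): the measured delay between lensed images scales with the time-delay distance D_dt = (1 + z_lens) D_l D_s / D_ls, which is inversely proportional to H0, so biases in the delay or in the line-of-sight correction propagate directly into the inferred H0. A minimal sketch assuming a flat LCDM background and hypothetical lens and source redshifts, using astropy:

        # Minimal sketch (hypothetical numbers, not from the thesis): the time-delay
        # distance D_dt = (1 + z_lens) * D_l * D_s / D_ls scales as 1/H0, so a bias
        # in the measured delay biases the inferred Hubble constant proportionally.
        from astropy.cosmology import FlatLambdaCDM

        z_lens, z_source = 0.5, 2.0  # hypothetical lens and source redshifts

        def time_delay_distance(H0, Om0=0.3):
            cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)
            D_l = cosmo.angular_diameter_distance(z_lens)
            D_s = cosmo.angular_diameter_distance(z_source)
            D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_source)
            return (1 + z_lens) * D_l * D_s / D_ls

        for H0 in (67.0, 70.0, 73.0):
            D_dt = time_delay_distance(H0)
            print(f"H0 = {H0:4.1f} km/s/Mpc  ->  D_dt = {D_dt.value:.0f} Mpc")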